Weave automatically tracks and logs LLM calls made via LiteLLM, after weave.init() is called.
Traces
It’s important to store traces of LLM applications in a central database, both during development and in production. You’ll use these traces for debugging, and as a dataset that will help you improve your application.

Note: When using LiteLLM, make sure to import the library using import litellm and call the completion function with litellm.completion instead of from litellm import completion. This ensures that all functions and parameters are correctly referenced.

Weave will automatically capture traces for LiteLLM. You can use the library as usual, start by calling weave.init():
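A minimal sketch of this flow; the project name and prompt are placeholders:

```python
import litellm
import weave

# Initialize Weave; the project name is a placeholder.
weave.init("weave_litellm_integration")

# Call LiteLLM as usual; Weave captures the trace automatically.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Translate 'Hello, how are you?' to French"}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```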
Wrapping with your own ops
Weave ops make results reproducible by automatically versioning code as you experiment, and they capture their inputs and outputs. Simply create a function decorated with @weave.op() that calls into LiteLLM’s completion function, and Weave will track the inputs and outputs for you. Here’s an example:
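A small sketch of such an op; the function name and project name are illustrative:

```python
import litellm
import weave

weave.init("weave_litellm_integration")

@weave.op()
def translate(text: str, target_language: str) -> str:
    """Translate text via LiteLLM; Weave tracks the inputs and outputs."""
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Translate '{text}' to {target_language}."}],
        max_tokens=1024,
    )
    return response.choices[0].message.content

print(translate("Hello, how are you?", "French"))
```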
Create a Model for easier experimentation
Organizing experimentation is difficult when there are many moving pieces. By using the Model class, you can capture and organize the experimental details of your app, like your system prompt or the model you’re using. This helps organize and compare different iterations of your app.
In addition to versioning code and capturing inputs/outputs, Models capture structured parameters that control your application’s behavior, making it easy to find what parameters worked best. You can also use Weave Models with serve and with Evaluations.
In the example below, you can experiment with different models and temperatures:
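A sketch of what that could look like; the class name, project name, and model identifiers are illustrative:

```python
import litellm
import weave

weave.init("weave_litellm_integration")

class TranslatorModel(weave.Model):
    # Structured parameters captured and versioned by Weave.
    model: str
    temperature: float

    @weave.op()
    def predict(self, text: str, target_language: str) -> str:
        response = litellm.completion(
            model=self.model,
            messages=[
                {"role": "system", "content": f"You are a translator. Translate the given text to {target_language}."},
                {"role": "user", "content": text},
            ],
            max_tokens=1024,
            temperature=self.temperature,
        )
        return response.choices[0].message.content

# Instantiate the Model with different parameters to compare runs.
gpt_model = TranslatorModel(model="gpt-3.5-turbo", temperature=0.3)
claude_model = TranslatorModel(model="claude-3-5-sonnet-20240620", temperature=0.1)

print(gpt_model.predict("Hello, how are you?", "French"))
print(claude_model.predict("Hello, how are you?", "Spanish"))
```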
Function Calling
LiteLLM supports function calling for compatible models. Weave will automatically track these function calls.
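A sketch using the OpenAI-style tools parameter, which LiteLLM passes through to compatible models; the tool definition and project name are illustrative:

```python
import litellm
import weave

weave.init("weave_litellm_integration")

# Illustrative OpenAI-style tool definition.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and state, e.g. San Francisco, CA",
                    },
                },
                "required": ["location"],
            },
        },
    }
]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    tools=tools,
    tool_choice="auto",
)
# Any tool call requested by the model is recorded in the trace.
print(response.choices[0].message.tool_calls)
```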